
Keyword Search Result

[Keyword] neural networks (287 hits)

Showing 161-180 of 287 hits

  • Competitive Learning Algorithms Founded on Adaptivity and Sensitivity Deletion Methods

    Michiharu MAEDA  Hiromi MIYAJIMA  

     
    LETTER-Neural Networks and Bioengineering

    Vol: E83-A No:12  Page(s): 2770-2774

    This paper describes two competitive learning algorithms from the viewpoint of mechanisms for deleting weight (reference) vectors. The techniques, termed adaptivity deletion and sensitivity deletion, are based on the criteria of partition error and distortion error, respectively. Experimental results show the effectiveness of the proposed approaches in terms of average distortion.
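
    A minimal, hypothetical sketch of competitive learning with a periodic deletion step is given below; the paper's exact adaptivity and sensitivity deletion criteria are not reproduced, and the distortion-based respawn rule here is purely illustrative.

```python
# Illustrative sketch (not the authors' exact procedure): winner-take-all
# competitive learning in which reference vectors are periodically deleted
# according to a criterion and respawned elsewhere.
import numpy as np

def competitive_learning_with_deletion(data, n_vectors=8, epochs=20,
                                       lr=0.05, delete_every=5):
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), n_vectors, replace=False)].astype(float)
    for epoch in range(epochs):
        wins = np.zeros(n_vectors)   # partition sizes (an adaptivity-style criterion could use these)
        dist = np.zeros(n_vectors)   # accumulated distortion per reference vector
        for x in data[rng.permutation(len(data))]:
            j = np.argmin(((w - x) ** 2).sum(axis=1))   # winner-take-all selection
            w[j] += lr * (x - w[j])                     # move the winner toward the sample
            wins[j] += 1
            dist[j] += ((w[j] - x) ** 2).sum()
        if (epoch + 1) % delete_every == 0:
            # Delete the vector with the smallest distortion contribution
            # (a sensitivity-style criterion) and respawn it near the most
            # distorted vector to rebalance the partition.
            loser, busiest = np.argmin(dist), np.argmax(dist)
            w[loser] = w[busiest] + 0.01 * rng.standard_normal(w.shape[1])
    return w
```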

  • On a Weight Limit Approach for Enhancing Fault Tolerance of Feedforward Neural Networks

    Naotake KAMIURA  Teijiro ISOKAWA  Yutaka HATA  Nobuyuki MATSUI  Kazuharu YAMATO  

     
    PAPER-Fault Tolerance

    Vol: E83-D No:11  Page(s): 1931-1939

    To enhance the fault tolerance of feedforward neural networks (NNs for short) implemented in hardware, we discuss a learning algorithm that converges without adding extra neurons or requiring a large amount of extra learning time or extra learning cycles. Our algorithm, a modification of the standard backpropagation algorithm (SBPA for short), limits the synaptic weights of neurons to a range during the learning phase. The upper and lower bounds of the weights are calculated from their average and standard deviation. The algorithm then re-updates any weight beyond the calculated range to the upper or lower bound. Since this decreases the standard deviation of the weights, it is useful for enhancing fault tolerance. We apply NNs trained with our algorithm and with other algorithms to a character recognition problem, and show that ours is superior to the others in reliability, extra learning time, and/or extra learning cycles. We also show that our algorithm never degrades the generalization ability of the NNs even though it confines the weights to the calculated range.
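
    Below is a minimal sketch of the weight-limiting idea, assuming bounds of the form mean ± k·std; the paper's exact bound calculation and re-update schedule may differ.

```python
# Minimal sketch of the weight-limiting idea described above (assumption: the
# bound is mean +/- k*std of the current weights): after each backpropagation
# update, weights outside the band are clipped back to the bound, which
# reduces the spread of the weight distribution.
import numpy as np

def clip_weights_to_band(W, k=2.0):
    mu, sigma = W.mean(), W.std()
    lower, upper = mu - k * sigma, mu + k * sigma
    return np.clip(W, lower, upper)

# Hypothetical use inside a training loop (W_hidden, grad_hidden, lr are
# placeholders, not names from the paper):
# W_hidden -= lr * grad_hidden
# W_hidden = clip_weights_to_band(W_hidden)
```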

  • Hand Gesture Recognition Using T-CombNET: A New Neural Network Model

    Marcus Vinicius LAMAR  Md. Shoaib BHUIYAN  Akira IWATA  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E83-D No:11  Page(s): 1986-1995

    This paper presents a new neural network structure, called Temporal-CombNET (T-CombNET), dedicated to time-series analysis and classification. It has been developed from a large-scale neural network structure, CombNET-II, which was designed to deal with very large vocabularies, such as in Japanese character recognition. Our modifications of the original CombNET-II model allow it to perform temporal analysis and to be used in a recognition system for a large set of human movements. In the T-CombNET structure, one of the most important parameters to be set is the space division criterion. In this paper we analyze some practical approaches and present a criterion based on an Interclass Distance Measurement. The performance of T-CombNET is analyzed by applying it to a practical problem, Japanese Kana finger spelling recognition. The obtained results show a recognition rate superior to those of other structures, including the Multi-Layer Perceptron, Learning Vector Quantization, Elman and Jordan partially recurrent neural networks, CombNET-II, and k-NN.

  • Image Compression by New Sub-Image Block Classification Techniques Using Neural Networks

    Newaz M. S. RAHIM  Takashi YAHAGI  

     
    LETTER-Image

    Vol: E83-A No:10  Page(s): 2040-2043

    A new method for classifying sub-image blocks for digital image compression using neural networks is proposed. Two different classification algorithms are used and shown to be more effective than conventional classification techniques. Simulation results are presented that demonstrate the effectiveness of the new technique.

  • The Determination of the Evoked Potential Generating Mechanism Based on Radial Basis Neural Network Model

    Rustu Murat DEMIRER  Yukio KOSUGI  Halil Ozcan GULCUR  

     
    LETTER-Biocybernetics, Neurocomputing

    Vol: E83-D No:9  Page(s): 1819-1823

    This paper investigates the modeling of nonlinearity in the generation of the single-trial evoked potential signal (s-EP) by means of a mixed radial basis function neural network (M-RBFN). Particular emphasis is placed on the contribution of the spontaneous EEG term to the s-EP signal. The method is based on a nonlinear M-RBFN model that is trained simultaneously on different segments of EEG/EP data. The output of the trained model (estimator) is a fitted and reduced (optimized) nonlinear model that provides a global representation of the transition dynamics between spontaneous brain activity and poststimulus periods. The performance of the proposed neural network method is evaluated using a realistic simulation and applied to a real EEG/EP measurement.

  • A Novel Competitive Learning Technique for the Design of Variable-Rate Vector Quantizers with Reproduction Vector Training in the Wavelet Domain

    Wen-Jyi HWANG  Maw-Rong LEOU  Shih-Chiang LIAO  Chienmin OU  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:9  Page(s): 1781-1789

    This paper presents a novel competitive learning algorithm for the design of variable-rate vector quantizers (VQs). The algorithm, termed the variable-rate competitive learning (VRCL) algorithm, designs a VQ having minimum average distortion subject to a rate constraint. The VRCL algorithm performs the weight vector training in the wavelet domain, so the required training time is short. In addition, the algorithm enjoys better rate-distortion performance than other existing VQ design algorithms and competitive learning algorithms, and is less sensitive to the selection of initial codewords than existing design algorithms. Therefore, the VRCL algorithm can be an effective alternative to existing variable-rate VQ design algorithms for signal compression applications.
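
    The following sketch illustrates generic rate-constrained (entropy-constrained) competitive learning; the VRCL specifics, in particular the reproduction vector training in the wavelet domain, are not reproduced here, and the update form and variable names are assumptions.

```python
# Hedged sketch of a rate-constrained competitive learning step: the winner is
# chosen by distortion plus a rate penalty derived from empirical codeword
# usage. This only illustrates the general variable-rate idea, not the VRCL
# algorithm itself.
import numpy as np

def rate_constrained_step(x, codebook, counts, lam=0.1, lr=0.05):
    # counts: codeword usage counts, initialized to ones before training
    p = counts / counts.sum()                     # empirical codeword probabilities
    rate = -np.log2(np.maximum(p, 1e-12))         # approximate codeword length in bits
    dist = ((codebook - x) ** 2).sum(axis=1)      # squared-error distortion per codeword
    j = np.argmin(dist + lam * rate)              # rate-constrained winner selection
    codebook[j] += lr * (x - codebook[j])         # competitive (winner) update
    counts[j] += 1
    return j
```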

  • Majority Algorithm: A Formation for Neural Networks with the Quantized Connection Weights

    Cheol-Young PARK  Koji NAKAJIMA  

     
    PAPER

    Vol: E83-A No:6  Page(s): 1059-1065

    In this paper, we propose the majority algorithm for choosing the connection weights of neural networks with connection weights quantized to 1 and 0. We also obtain, through an application of this algorithm, a layered network that solves the parity problem for an arbitrary number N of inputs. The network can be expected to have the same generalization ability as a network trained with learning rules, because the connection weights can be decided regardless of the size of the training set. According to our case study, the connection weights can be decided without learning; thus, we expect that the proposed algorithm may be applicable to real-time processing.

  • Active Vision System Based on Human Eye Saccadic Movement

    Sang-Woo BAN  Jun-Ki CHO  Soon-Ki JUNG  Minho LEE  

     
    PAPER

    Vol: E83-A No:6  Page(s): 1066-1074

    We propose a new active vision system that mimics the saccadic movement of the human eye. It is implemented based on a new computational model using neural networks. In this model, the visual pathway of a saccadic eye movement is divided into three parts, each of which is individually modeled with a different neural network to reflect the principal functionality of the brain structures involved in saccadic eye movements. The visual cortex for saccadic eye movements is modeled using a self-organizing feature map, and a modified learning vector quantization network is applied to imitate the activity of the superior colliculus in response to a visual stimulus. In addition, a multilayer recurrent neural network, trained by an evolutionary computation algorithm, is used to model the visual pathway from the superior colliculus to the oculomotor neurons. Results from a computer simulation show that the proposed computational model is effective in mimicking human eye movements during a saccade. Based on the proposed model, an active vision system using a CCD camera and a motor system was developed and demonstrated with experimental results.

  • A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias

    Hongbing ZHU  Kei EGUCHI  Toru TABATA  

     
    PAPER

    Vol: E83-A No:6  Page(s): 1040-1048

    The conventional back-propagation algorithm cannot be applied to networks of units having hard-limiting output functions, because these functions cannot be differentiated. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of units having hard-limiting output functions is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, which show that the performance of this algorithm is similar to that of conventional back-propagation.
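
    A minimal sketch of the random-bias idea follows: if the bias is a random variable with a smooth cumulative distribution function F, then P(output = 1) = F(w·x) is differentiable in the weights even though every realized output is hard-limited. The logistic CDF and cross-entropy objective used here are assumptions; the paper may use a different bias distribution and cost.

```python
# Sketch of training a single stochastic hard-limiting unit: the bias is a
# logistic random variable, so the probability of firing is sigmoid(w.x),
# which is differentiable and can be trained by gradient descent.
import numpy as np

def logistic_cdf(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_stochastic_threshold_unit(X, t, lr=0.1, epochs=200):
    # X: (N, d) inputs, t: (N,) binary targets
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = logistic_cdf(X @ w)            # probability that the hard unit outputs 1
        w -= lr * X.T @ (p - t) / len(t)   # cross-entropy gradient w.r.t. the weights
    return w

# At run time the unit remains hard-limiting:
# output = (x @ w + sampled_bias > 0).
```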

  • Neural Networks Learning Differential Data

    Ryusuke MASUOKA  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E83-D No:6  Page(s): 1291-1300

    In many machine learning problems, it is essential to use not only the training data but also a priori knowledge about how the world is constrained. In many cases, such knowledge is given in the form of constraints on differential data, or more specifically partial differential equations (PDEs). Neural networks capable of learning differential data can take advantage of such knowledge and easily incorporate such constraints into the learning of training value data. In this paper, we report a structure, an algorithm, and results of experiments on neural networks learning differential data.
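
    A minimal sketch of learning from differential data is shown below, using automatic differentiation to add a derivative-matching term to the loss. The network structure and algorithm in the paper differ; this only illustrates the generic idea, and the target function is hypothetical.

```python
# Generic sketch: fit both value data and differential (derivative) data by
# adding a derivative-matching term to the loss, with the derivative of the
# network output obtained by automatic differentiation.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-1, 1, 50).unsqueeze(1).requires_grad_(True)
y_target = torch.sin(3 * x).detach()        # value data (hypothetical target)
dy_target = 3 * torch.cos(3 * x).detach()   # differential data (known derivative)

for _ in range(500):
    y = net(x)
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]  # dy/dx
    loss = ((y - y_target) ** 2).mean() + ((dy - dy_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```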

  • Shape from Focus Using Multilayer Feedforward Neural Networks

    Muhammad ASIF  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:4  Page(s): 946-949

    Conventional shape from focus (SFF) methods suffer from inaccuracies because of the piecewise-constant approximation of the focused image surface (FIS). We propose a more accurate SFF scheme based on representing the three-dimensional FIS in terms of neural network weights. The neural networks are trained to learn the shape of the FIS that maximizes the focus measure.

  • A Constructive Compound Neural Networks. II Application to Artificial Life in a Competitive Environment

    Jianjun YAN  Naoyuki TOKUDA  Juichi MIYAMICHI  

     
    PAPER-Artificial Intelligence, Cognitive Science

    Vol: E83-D No:4  Page(s): 845-856

    We have developed a new, efficient neural network-based algorithm for Alife applications in a competitive world, whereby the effects of interactions among organisms are evaluated in a weak form by taking the position of the nearest food elements into consideration but not the positions of the other competing organisms. Two online learning algorithms, an instructive ASL (adaptive supervised learning) algorithm and an evaluative, feedback-oriented RL (reinforcement learning) algorithm, have been developed and tested in simulated Alife environments with various neural network algorithms. The constructive compound neural network algorithm FuzGa guided by the ASL learning algorithm has proved to be the most efficient among the methods tested, including the classical constructive cascaded CasCor algorithm of [18],[19] and fixed, non-constructive fuzzy neural networks. Adopting an adaptively selected best sequence of the feedback action period Δα, which we have found to be a decisive parameter in improving network efficiency, the ASL-guided FuzGa achieved an average fitness value of 541.8 (standard deviation 48.8), compared with 500 (53.8) for the ASL-guided CasCor and 489.2 (39.7) for the RL-guided FuzGa. Our FuzGa algorithm has also outperformed CasCor in time complexity by 31.1%. We have elucidated how the dimensionless food-availability parameter FA, representing the intensity of interactions among the organisms, relates to the best sequence of the feedback action period Δα and to an optimal number of hidden neurons for the given network configuration. We confirm that the present solution successfully evaluates the effect of interactions at larger FA, reducing to an isolated solution at lower values of FA. The simulation is carried out with Java thread functions, ensuring the randomness of individual activities.

  • Introduction of Orthonormal Transform into Neural Filter for Accelerating Convergence Speed

    Isao NAKANISHI  Yoshio ITOH  Yutaka FUKUI  

     
    LETTER

    Vol: E83-A No:2  Page(s): 367-370

    As a nonlinear adaptive filter, the neural filter is used to process nonlinear signals and/or systems. However, the neural filter requires a large number of iterations to converge. This letter presents a new structure for the multi-layer neural filter in which an orthonormal transform is introduced into all inter-layer connections to accelerate convergence. The proposed structure is called the transform domain neural filter (TDNF) for convenience. The weights are basically updated by the Back-Propagation (BP) algorithm, but the algorithm must be modified since the error back-propagates through the orthonormal transform. Moreover, a variable step size normalized by the transformed signal power is introduced into the BP algorithm to take advantage of the orthonormal transform. Computer simulations confirm that introducing the orthonormal transform effectively speeds up the convergence of the neural filter.
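
    The sketch below illustrates the transform-domain idea for a single linear stage: the input is passed through an orthonormal DCT and each step size is normalized by the running power of the corresponding transformed component, as in transform-domain adaptive filtering. The multi-layer structure and the modified BP of the TDNF are not reproduced; the choice of DCT and the update form are assumptions.

```python
# Hedged sketch of a transform-domain update for one linear stage: orthonormal
# DCT of the input, then a power-normalized LMS/BP-style weight step.
import numpy as np
from scipy.fft import dct

def transform_domain_update(x, w, error, power, lr=0.5, beta=0.9, eps=1e-8):
    u = dct(x, norm="ortho")                    # orthonormal transform of the input
    power = beta * power + (1 - beta) * u**2    # running power per transformed bin
    y = w @ u                                   # linear output of this stage
    w = w + lr * error * u / (power + eps)      # step size normalized by signal power
    return y, w, power
```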

  • A Phasor Model with Resting States

    Teruyuki MIYAJIMA  Fumihito BAISHO  Kazuo YAMANAKA  Kazuhiko NAKAMURA  Masahiro AGU  

     
    LETTER-Biocybernetics, Neurocomputing

    Vol: E83-D No:2  Page(s): 299-301

    A new phasor model of neural networks is proposed in which the state of each neuron may take a value at the origin as well as on the unit circle. A stability property of equilibria is studied in association with the energy landscape. It is shown that a simple condition guarantees that an equilibrium is asymptotically stable.

  • Evolutional Design and Training Algorithm for Feedforward Neural Networks

    Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E82-D No:10  Page(s): 1384-1392

    In pattern recognition using neural networks, it is very difficult for researchers or users to design an optimal neural network architecture for a specific task. Almost any kind of neural network architecture can attain some recognition ratio, but it is difficult to obtain analytically the architecture that is optimal for a specific task in terms of recognition ratio and training effectiveness. In this paper, an evolutional method for training and designing feedforward neural networks is proposed. In the proposed method, a neural network is defined as an individual, and neural networks with the same architecture as a species. Individuals are evaluated by the normalized mean square error (MSE), which represents the performance of a network on the training patterns. Their architectures then evolve according to an evolution rule proposed here. Architectures of neural networks, in other words species, are evaluated by criteria different from those for individuals: the best individual in the species and the speed of evolution of the species. The population size of each species is increased or decreased according to these criteria. The evolution rule generates slightly different network architectures from superior species, so the proposed method can generate a variety of neural network architectures. The design and training of neural networks for simple classification of 3 × 3 and 4 × 4 pixel patterns containing vertical, horizontal, and oblique lines, and for handwritten KATAKANA recognition, are presented. The efficiency of the proposed method is also discussed.
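
    A very rough sketch of such an evolutionary design loop is given below, assuming that a "species" is simply a hidden-layer size, an individual is one trained network, and species populations grow or shrink with their best fitness; the paper's evolution rule and species criteria are more elaborate, and the scoring function here is a placeholder.

```python
# Illustrative evolutionary architecture-search loop (not the paper's exact
# evolution rule): species = hidden-layer size, fitness = normalized MSE.
import random

def train_and_score(hidden_size):
    # Placeholder for training a feedforward network and returning its
    # normalized MSE on the training patterns (lower is better).
    return abs(hidden_size - 12) / 12 + random.uniform(0, 0.1)

species = {4: 3, 8: 3, 16: 3}                        # hidden size -> population size
for generation in range(10):
    scores = {h: min(train_and_score(h) for _ in range(n))  # best individual per species
              for h, n in species.items()}
    best = min(scores, key=scores.get)
    worst = max(scores, key=scores.get)
    species[best] += 1                               # superior species grows
    species[worst] = max(1, species[worst] - 1)      # inferior species shrinks
    mutant = max(1, best + random.choice([-2, 2]))   # slightly different architecture
    species.setdefault(mutant, 1)                    # new species enters the population
```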

  • Signature Pattern Recognition Using Moments Invariant and a New Fuzzy LVQ Model

    Payam NASSERY  Karim FAEZ  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E81-D No:12  Page(s): 1483-1493

    In this paper we introduce a new method for signature pattern recognition, taking advantage of image moment transformations combined with a fuzzy logic approach. For this purpose, we first model the noise inherently embedded in signature patterns and separate it from environmental effects. Based on the results of this first step, we map the patterns into the unit circle using the least mean square (LMS) error criterion to remove the variations caused by shifting or scaling. We then derive some orientation-invariant moments introduced in earlier reports and study their statistical properties in our particular input space. We further define a fuzzy complex space and a fuzzy complex similarity measure in this space, and construct a new training algorithm based on the fuzzy learning vector quantization (FLVQ) method. A comparison method is also proposed so that any input pattern can be compared to the learned prototypes through the pre-defined fuzzy similarity measure. Each set of the above image moments is used by the fuzzy classifier separately, and the misclassifications are counted as a measure of error magnitude. The efficiency of the proposed FLVQ model is shown numerically in comparison with the conventional FLVQs reported so far. Finally, satisfactory results are derived and a comparison is made among the image transformations considered.
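
    A hedged sketch of a membership-weighted (fuzzy LVQ-style) prototype update is shown below; the paper's fuzzy complex space and fuzzy complex similarity measure are not reproduced, and the fuzzy c-means-style memberships used here are an assumption.

```python
# Generic fuzzy LVQ-style update: each prototype moves toward the input in
# proportion to its fuzzy membership, computed here with a fuzzy c-means-style
# formula on Euclidean distances (an assumption, not the paper's measure).
import numpy as np

def fuzzy_lvq_step(x, prototypes, m=2.0, lr=0.05):
    # prototypes: (K, d) float array of learned prototypes
    d = np.maximum(((prototypes - x) ** 2).sum(axis=1), 1e-12)
    u = (d[:, None] / d[None, :]) ** (1.0 / (m - 1))
    memberships = 1.0 / u.sum(axis=1)                      # fuzzy memberships, sum to 1
    prototypes += lr * (memberships ** m)[:, None] * (x - prototypes)
    return prototypes, memberships
```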

  • A New Constructive Compound Neural Networks Using Fuzzy Logic and Genetic Algorithm 1 Application to Artificial Life

    Jianjun YAN  Naoyuki TOKUDA  Juichi MIYAMICHI  

     
    LETTER-Bio-Cybernetics and Neurocomputing

    Vol: E81-D No:12  Page(s): 1507-1516

    This paper presents a new compound constructive algorithm for neural networks, in which fuzzy logic is explored as an efficient learning technique for constructing an optimal network from an initial simple 3-layer network, while a genetic algorithm is used to help design an improved network by evolution. Numerical simulations on artificial life demonstrate that, compared with existing network design algorithms such as constructive algorithms, pruning algorithms, and fixed, static-architecture algorithms, the present algorithm, called FuzGa, is efficient in both time complexity and network performance. The improved time complexity comes from the sufficiently small 3-layer design of the neural networks and from the genetic algorithm adopted: the relatively small number of layers facilitates the use of an efficient steepest descent method in narrowing down the solution space of the fuzzy logic, and trapping in local minima can be avoided by the genetic algorithm, contributing to considerable savings in the time required for network learning and connection. Our simulation results on artificial life show that the CPU time for the present FuzGa to reach the target fitness value of 100 food elements eaten has improved to 42.3 minutes on a SUN SPARCstation-10 (SuperSPARC, 40 MHz), for example, compared with 54.8 minutes for MLPs with 65 hidden neurons, 63.1 minutes for FlexNet, and 96.0 minutes for Pruning. The role of hidden neurons in improving the performance of the neural networks of the various schemes developed for artificial life applications is elucidated. The effect of population size on the performance of the present FuzGa is also elucidated.

  • Grey Neural Network

    Chyun-Shin CHENG  Yen-Tseng HSU  Chwan-Chia WU  

     
    PAPER-Neural Networks

    Vol: E81-A No:11  Page(s): 2433-2442

    This paper proposes a Markov reliability model that includes the effects of permanent, intermittent, and transient faults for reliability evaluation. We also provide a new neural network and an improved training algorithm to evaluate the reliability of fault-tolerant systems. Simulation results show that the neuro-based reliability model converges faster than the other methods. The system state equations for the Markov model are a set of first-order linear differential equations, and the system reliability can usually be evaluated from the combined state solutions; however, this technique is very complicated and difficult for complex fault-tolerant systems. In this paper, we present grey models (GM(1,1), DF-GM(1,1), and ERC-GM(1,1)) to evaluate the reliability of computer systems. They can obtain the system reliability more directly and simply than the Markov model, but the number of data points that gives the grey model minimal error differs at each time step. Therefore, a feedforward neural network is designed, on the basis of the more accurate predictions of the grey modeling, to evaluate the reliability. Finally, the simulation results show that this technique leads to better accuracy than the grey model alone.
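
    The sketch below shows a basic GM(1,1) grey prediction model, the building block named above; the DF-GM(1,1) and ERC-GM(1,1) variants and the neural selection of the data window are not reproduced here.

```python
# Basic GM(1,1) grey model: accumulate the series (AGO), fit the develop
# coefficient a and grey input b by least squares, then forecast and
# difference back (inverse AGO).
import numpy as np

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # develop coefficient, grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # inverse AGO
    x0_hat[0] = x0[0]
    return x0_hat[-steps:]                          # forecast values

# Example on a reliability-like decaying series (illustrative data):
print(gm11_forecast([0.99, 0.97, 0.94, 0.90, 0.85], steps=2))
```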

  • Asynchronous Pulse Neural Network Model for VLSI Implementation

    Mitsuru HANAGATA  Yoshihiko HORIO  Kazuyuki AIHARA  

     
    PAPER-Neural Networks

    Vol: E81-A No:9  Page(s): 1853-1859

    An asynchronous pulse neural network model suitable for VLSI implementation is proposed. The model neuron can function as a coincidence detector as well as an integrator, depending on its internal time constant relative to the external one, and can show complex dynamical behavior, including chaotic responses. A network of the proposed neurons can process spatio-temporally coded information through dynamical cell assemblies with functional synaptic connections.

  • On the Uesaka's Conjecture as to the Optimization by Means of Neural Networks for Combinatorial Problems

    Tetsuo NISHI  

     
    PAPER-Neural Networks

    Vol: E81-A No:9  Page(s): 1811-1817

    This paper gives two kinds of functions for which Uesaka's Conjecture holds true. The conjecture states that the global optimum (not merely a local minimum) of a quadratic function F(x) = -(1/2)x^T A x over the n-dimensional hypercube may be obtained by solving a differential equation, where n denotes the dimension of the vector x. Uesaka stated in his paper that he had proved the conjecture only for n = 2, which corresponds to a very special case of the present results. The results of this paper suggest that the conjecture in fact holds for a wide class of quadratic functions and therefore partially support the conjecture.
